Effective decision making involves flexibly relating past experiences and relevant contextual information to a novel situation. In deep reinforcement learning, the dominant paradigm is for an agent to amortise information that helps decision making into its network weights via gradient descent on training losses. Here, we pursue an alternative approach in which the agent can utilise large-scale, context-sensitive database lookups to support its parametric computations. This allows the agent to learn directly, in an end-to-end manner, to use relevant information to inform its outputs. In addition, the agent can learn about new information without retraining, simply by augmenting the retrieval dataset. We study this approach in Go, a challenging game whose vast combinatorial state space privileges generalisation over direct matching to past experience. We leverage fast, approximate nearest-neighbour techniques to retrieve relevant data from a set of tens of millions of expert demonstration states. Attending to this information significantly improves prediction accuracy and game-play performance over simply using these demonstrations as training trajectories, providing a compelling demonstration of the value of large-scale retrieval in reinforcement learning agents.
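As a rough illustration of the retrieval step described above, the sketch below performs an exact nearest-neighbour lookup over demonstration state features with numpy. The feature dimensions, dataset, and k are placeholders, and a real system would use a fast approximate nearest-neighbour index rather than this brute-force search.

import numpy as np

def build_index(demo_features):
    """Normalise demonstration state features so that dot products give cosine similarity."""
    norms = np.linalg.norm(demo_features, axis=1, keepdims=True)
    return demo_features / np.maximum(norms, 1e-8)

def retrieve(index, query, k=5):
    """Return indices of the k most similar demonstration states to the query state."""
    q = query / max(np.linalg.norm(query), 1e-8)
    scores = index @ q                      # cosine similarity with every stored state
    return np.argsort(-scores)[:k]          # top-k neighbours (exact; a real system uses ANN)

# Toy usage: 10,000 stored demonstration states with 64-dimensional features.
rng = np.random.default_rng(0)
index = build_index(rng.normal(size=(10_000, 64)))
neighbours = retrieve(index, rng.normal(size=64), k=5)
print(neighbours)  # indices of retrieved expert states given to the agent as extra context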
Human language learners are exposed to informative, context-sensitive language, but to far greater quantities of raw sensory data. Through social language use and internal processes of rehearsal and practice, language learners are able to build high-level semantic representations that explain their perceptions. Here, we take inspiration from the process of "inner speech" in humans (Vygotsky, 1934) to better understand the role of intra-agent speech in embodied behaviour. First, we formally pose intra-agent speech as a semi-supervised problem and develop two algorithms that enable visually grounded captioning with little labelled language data. We then experimentally compute scaling curves over varying amounts of labelled data and compare data efficiency against a supervised learning baseline. Finally, we incorporate intra-agent speech into an embodied, mobile manipulator agent operating in a 3D virtual world, and show that with as few as 150 additional image captions, intra-agent speech enables the agent to manipulate and answer questions about a new object for which it has no related task experience (zero-shot). Taken together, our experiments suggest that modelling intra-agent speech is effective in enabling embodied agents to learn new tasks efficiently and without direct interaction experience.
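One way to picture the semi-supervised formulation is a training objective that mixes a small labelled set of image-caption pairs with pseudo-captions the model assigns to unlabelled images. The sketch below is only illustrative: the linear "captioner", the pseudo-labelling rule, and the mixing weight are assumptions, not the two algorithms referred to in the abstract.

import numpy as np

rng = np.random.default_rng(0)
VOCAB, FEAT = 20, 32                              # toy vocabulary and image-feature sizes
W = rng.normal(scale=0.1, size=(FEAT, VOCAB))     # toy "captioner": a linear word scorer

def word_probs(image_feat):
    logits = image_feat @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def supervised_loss(image_feat, word_id):
    return -np.log(word_probs(image_feat)[word_id] + 1e-8)

def pseudo_label(image_feat):
    """Self-training rule: treat the model's own most likely word as the target."""
    return int(np.argmax(word_probs(image_feat)))

# A handful of labelled pairs plus many unlabelled images, as in the semi-supervised setting.
labelled = [(rng.normal(size=FEAT), rng.integers(VOCAB)) for _ in range(150)]
unlabelled = [rng.normal(size=FEAT) for _ in range(5000)]

loss = sum(supervised_loss(x, y) for x, y in labelled)
loss += 0.1 * sum(supervised_loss(x, pseudo_label(x)) for x in unlabelled[:100])
print(loss)  # combined objective a learner would minimise by gradient descent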
Creating agents that can interact naturally with humans is a common goal of artificial intelligence (AI) research. However, evaluating these interactions is challenging: collecting online human-agent interactions is slow and expensive, yet faster proxy metrics often do not correlate well with interactive evaluation. In this paper, we assess the merits of these existing evaluation metrics and present a novel evaluation methodology called the Standardised Test Suite (STS). The STS uses behavioural scenarios mined from real human interaction data. Agents see a replayed scenario context, receive an instruction, and are then given control to complete the interaction offline. These agent continuations are recorded and sent to human annotators, who mark them as success or failure, and agents are ranked according to the proportion of their continuations that succeed. The resulting STS is fast, controlled, interpretable, and representative of naturalistic interactions. Altogether, the STS consolidates much of what is desirable across many of our standard evaluation metrics, allowing us to accelerate research progress towards producing agents that can interact naturally with humans. Videos can be found at https://youtu.be/yr1tnggorgq.
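The final ranking step of the STS is straightforward to express in code. The sketch below assumes the agent continuations have already been annotated as success or failure; the agent names, scenario ids, and records are made up.

from collections import defaultdict

# Hypothetical annotation records: (agent_name, scenario_id, annotator_judged_success)
annotations = [
    ("agent_a", "sc01", True), ("agent_a", "sc02", False), ("agent_a", "sc03", True),
    ("agent_b", "sc01", True), ("agent_b", "sc02", True),  ("agent_b", "sc03", True),
]

def sts_scores(records):
    """Proportion of scenario continuations judged successful, per agent."""
    totals, wins = defaultdict(int), defaultdict(int)
    for agent, _scenario, success in records:
        totals[agent] += 1
        wins[agent] += int(success)
    return {agent: wins[agent] / totals[agent] for agent in totals}

ranking = sorted(sts_scores(annotations).items(), key=lambda kv: -kv[1])
print(ranking)  # agents ranked by success rate, e.g. [('agent_b', 1.0), ('agent_a', 0.67)]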
Processing sets or other unordered, potentially variable-sized inputs in neural networks is usually handled by aggregating a number of input tensors into a single representation. While many aggregation methods already exist, from simple sum pooling to multi-head attention, they are limited in their representational power from both theoretical and empirical perspectives. In search of a principally more powerful aggregation strategy, we propose an optimisation-based method called Equilibrium Aggregation. We show that many existing aggregation methods can be recovered as special cases of Equilibrium Aggregation, and that it is more efficient in some important cases. Equilibrium Aggregation can be used as a drop-in replacement in many existing architectures and applications. We validate its efficiency on three different tasks: median estimation, class counting, and molecular property prediction. In all experiments, Equilibrium Aggregation achieves higher performance than the other aggregation techniques we tested.
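A minimal sketch of aggregation by optimisation: the aggregate is the minimiser of a summed potential over the set. With the simple quadratic potential assumed here, the equilibrium reduces to mean pooling, illustrating how existing aggregators arise as special cases; the learned, non-quadratic potentials of Equilibrium Aggregation are what give it extra representational power.

import numpy as np

def energy_grad(y, x, alpha=1.0):
    """Gradient in y of a simple quadratic potential alpha * ||x - y||^2 / 2."""
    return alpha * (y - x)

def equilibrium_aggregate(xs, steps=200, lr=0.1):
    """Aggregate a set by minimising the summed potential over a shared output y."""
    y = np.zeros(xs.shape[1])
    for _ in range(steps):
        y -= lr * sum(energy_grad(y, x) for x in xs) / len(xs)
    return y

xs = np.random.default_rng(0).normal(size=(7, 4))        # a set of 7 input vectors
print(np.allclose(equilibrium_aggregate(xs), xs.mean(axis=0), atol=1e-3))
# True: with this quadratic potential the equilibrium is exactly mean pooling.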
It would be useful for machines to use computers as humans do so that they can aid us in everyday tasks. This is a setting in which there is also the potential to leverage large-scale expert demonstrations and human judgements of interactive behaviour, which are two ingredients that have driven much recent success in AI. Here we investigate the setting of computer control using keyboard and mouse, with goals specified via natural language. Instead of focusing on hand-designed curricula and specialized action spaces, we focus on developing a scalable method centered on reinforcement learning combined with behavioural priors informed by actual human-computer interactions. We achieve state-of-the-art and human-level mean performance across all tasks within the MiniWob++ benchmark, a challenging suite of computer control problems, and find strong evidence of cross-task transfer. These results demonstrate the usefulness of a unified human-agent interface when training machines to use computers. Altogether our results suggest a formula for achieving competency beyond MiniWob++ and towards controlling computers, in general, as a human would.
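One common way to combine reinforcement learning with a behavioural prior learned from human demonstrations is to regularise the policy towards that prior. The abstract does not spell out the objective used, so the KL-regularised policy-gradient term below is only an assumed illustration.

import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def regularised_policy_loss(policy_logits, prior_logits, action, advantage, beta=0.1):
    """Policy-gradient term plus a KL penalty keeping the policy near a behavioural prior.

    This is one generic way to combine RL with a prior cloned from human demonstrations;
    the exact objective used by the agent in the abstract is not specified here.
    """
    pi, prior = softmax(policy_logits), softmax(prior_logits)
    pg_loss = -np.log(pi[action] + 1e-8) * advantage               # reinforce the taken action
    kl = np.sum(pi * (np.log(pi + 1e-8) - np.log(prior + 1e-8)))   # stay close to the prior
    return pg_loss + beta * kl

print(regularised_policy_loss(np.zeros(5), np.array([2.0, 0, 0, 0, 0]), action=0, advantage=1.0))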
A common vision from science fiction is that robots will one day inhabit our physical spaces, perceive the world as we do, assist our physical labour, and communicate with us through natural language. Here, we study how to design artificial agents that can interact naturally with humans, using the simplified setting of virtual environments. We show that imitation learning of human interactions in a simulated world, combined with self-supervised learning, is sufficient to produce a multimodal interactive agent, which we call MIA, that successfully interacts with non-adversarial humans 75% of the time. We further identify architectural and algorithmic techniques that improve performance, such as hierarchical action selection. Altogether, our results demonstrate that imitating multimodal, real-time human behaviour provides a straightforward and surprisingly effective means of endowing agents with rich behaviours, which can then be fine-tuned for specific purposes, laying a foundation for training capable interactive robots or digital assistants. A video of MIA's behaviour can be found at https://youtu.be/zfgrif7my
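Hierarchical action selection, mentioned above as one of the techniques that improved performance, can be pictured as a two-stage sampling procedure: first choose an action group, then an action within it. The groups, logits, and sampling scheme below are assumptions for illustration only, not MIA's actual action space.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hierarchical_action(rng, group_logits, within_group_logits):
    """Two-stage action selection: sample an action group, then an action inside that group."""
    group = rng.choice(len(group_logits), p=softmax(group_logits))
    action = rng.choice(len(within_group_logits[group]), p=softmax(within_group_logits[group]))
    return group, action

rng = np.random.default_rng(0)
groups = np.array([0.5, 1.0, -1.0])                      # e.g. move, grasp, speak (made up)
within = [np.zeros(4), np.zeros(6), np.zeros(10)]        # sub-actions per group (made up)
print(hierarchical_action(rng, groups, within))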
Learned world models summarize an agent's experience to facilitate learning complex behaviors. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination. We efficiently learn behaviors by propagating analytic gradients of learned state values back through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.
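The value targets Dreamer optimises against over an imagined rollout can be written as lambda-returns. The sketch below computes such targets from made-up imagined rewards and values; it shows the shape of the computation only and does not reproduce the analytic gradient propagation through the learned world model itself.

import numpy as np

def lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """TD(lambda)-style value targets along an imagined latent-space rollout.

    values has one more entry than rewards: the bootstrap value of the final imagined state.
    """
    horizon = len(rewards)
    returns = np.zeros(horizon)
    next_return = values[-1]                  # bootstrap from the last imagined value
    for t in reversed(range(horizon)):
        next_return = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * next_return)
        returns[t] = next_return
    return returns

rewards = np.array([0.0, 0.0, 1.0, 0.0, 0.5])          # imagined rewards (made up)
values = np.array([0.2, 0.3, 0.4, 0.3, 0.25, 0.2])     # imagined values plus bootstrap (made up)
print(lambda_returns(rewards, values))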
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games (the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled), our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
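The core of the learned model is a recurrent unroll: encode the observation into a latent state, then repeatedly apply a dynamics function and read out the reward, policy, and value for each hypothetical action. The sketch below uses random linear maps as stand-ins for MuZero's learned networks, purely to show the shape of the computation the planner relies on.

import numpy as np

rng = np.random.default_rng(0)
STATE, ACTIONS = 8, 4
# Toy stand-ins for the three learned functions (random linear maps, not trained networks).
W_repr = rng.normal(scale=0.3, size=(16, STATE))                # observation -> latent state
W_dyn  = rng.normal(scale=0.3, size=(STATE + ACTIONS, STATE))   # (state, action) -> next state
w_rew  = rng.normal(scale=0.3, size=STATE + ACTIONS)            # (state, action) -> reward
W_pol  = rng.normal(scale=0.3, size=(STATE, ACTIONS))           # state -> policy logits
w_val  = rng.normal(scale=0.3, size=STATE)                      # state -> value

def one_hot(a, n):
    v = np.zeros(n); v[a] = 1.0
    return v

def unroll(observation, actions):
    """Iteratively apply the learned dynamics and read out reward, policy and value,
    the three quantities the planner needs, without simulating the real environment."""
    state = np.tanh(W_repr.T @ observation)
    outputs = []
    for a in actions:
        sa = np.concatenate([state, one_hot(a, ACTIONS)])
        reward = float(w_rew @ sa)
        state = np.tanh(W_dyn.T @ sa)
        policy_logits, value = W_pol.T @ state, float(w_val @ state)
        outputs.append((reward, policy_logits, value))
    return outputs

print(unroll(rng.normal(size=16), actions=[0, 2, 1]))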
Interacting with a complex world involves continual learning, in which tasks and data distributions change over time. A continual learning system should demonstrate both plasticity (acquisition of new knowledge) and stability (preservation of old knowledge). Catastrophic forgetting is the failure of stability, in which new experience overwrites previous experience. In the brain, replay of past experience is widely believed to reduce forgetting, yet it has been largely overlooked as a solution to forgetting in deep reinforcement learning. Here, we introduce CLEAR, a replay-based method that greatly reduces catastrophic forgetting in multi-task reinforcement learning. CLEAR leverages off-policy learning and behavioral cloning from replay to enhance stability, as well as on-policy learning to preserve plasticity. We show that CLEAR performs better than state-of-the-art deep learning techniques for mitigating forgetting, despite being significantly less complicated and not requiring any knowledge of the individual tasks being learned.
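On replayed experience, a CLEAR-style update adds behavioural-cloning penalties that keep the current policy and value close to what was stored when the data were generated. The sketch below shows those two penalty terms with made-up numbers; the weights and the surrounding actor-critic losses computed on fresh and replayed trajectories are not specified here.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cloning_terms(current_logits, replay_probs, replay_value_pred, replay_value_target):
    """Behavioural-cloning penalties on replayed experience: keep the current policy close to
    the policy that generated the replayed data, and keep the value prediction close to the
    stored value. The exact weighting is illustrative."""
    pi = softmax(current_logits)
    policy_kl = np.sum(replay_probs * (np.log(replay_probs + 1e-8) - np.log(pi + 1e-8)))
    value_l2 = 0.5 * (replay_value_pred - replay_value_target) ** 2
    return policy_kl, value_l2

kl, l2 = cloning_terms(np.array([1.0, 0.0, -1.0]),
                       np.array([0.6, 0.3, 0.1]),
                       replay_value_pred=0.8, replay_value_target=1.0)
print(kl, l2)  # added (suitably weighted) to the usual RL losses on replayed data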
Planning has been very successful for control tasks with known environment dynamics. To leverage planning in unknown environments, the agent needs to learn the dynamics from interactions with the world. However, learning dynamics models that are accurate enough for planning has been a long-standing challenge, especially in image-based domains. We propose the Deep Planning Network (PlaNet), a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space. To achieve high performance, the dynamics model must accurately predict the rewards ahead for multiple time steps. We approach this using a latent dynamics model with both deterministic and stochastic transition components. Moreover, we propose a multi-step variational inference objective that we name latent overshooting. Using only pixel observations, our agent solves continuous control tasks with contact dynamics, partial observability, and sparse rewards, which exceed the difficulty of tasks that were previously solved by planning with learned models. PlaNet uses substantially fewer episodes and reaches final performance close to and sometimes higher than strong model-free algorithms.
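The latent transition described above combines a deterministic path with a stochastic path sampled from it. The sketch below imagines a few steps ahead with random linear maps standing in for the learned networks; it illustrates the structure of the state split, not the actual PlaNet model or the latent overshooting objective.

import numpy as np

rng = np.random.default_rng(0)
DET, STOCH, ACT = 8, 4, 2
# Toy linear stand-ins for the learned transition networks (not trained models).
W_det = rng.normal(scale=0.3, size=(DET + STOCH + ACT, DET))
W_mu  = rng.normal(scale=0.3, size=(DET, STOCH))
W_std = rng.normal(scale=0.3, size=(DET, STOCH))

def transition(det, stoch, action):
    """One latent step with a deterministic component and a stochastic component
    (a Gaussian sample conditioned on the new deterministic state)."""
    x = np.concatenate([det, stoch, action])
    det_next = np.tanh(W_det.T @ x)
    mean, std = W_mu.T @ det_next, np.exp(W_std.T @ det_next * 0.1)
    stoch_next = mean + std * rng.normal(size=STOCH)
    return det_next, stoch_next

det, stoch = np.zeros(DET), np.zeros(STOCH)
for _ in range(3):                                 # imagine three steps ahead in latent space
    det, stoch = transition(det, stoch, np.array([1.0, 0.0]))
print(det, stoch)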